
    Co-production of contrastive prosodic focus and manual gestures: temporal coordination and effects on the acoustic and articulatory correlates of focus

    Speech, and prosody in particular, is tightly linked to manual gestures. This study investigates the coordination of prosodic contrastive focus with different manual gestures (pointing, beat, and control gestures). We used motion capture recordings of ten speakers to explore this issue. The results show that prosodic focus "attracts" the manual gesture whatever its type, with the temporal alignment being stricter for pointing and realized mainly between the apex of the pointing gesture and the articulatory vocalic targets. Moreover, producing a gesture, whatever its type, does not affect the acoustic and articulatory correlates of prosodic focus.
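    To make the alignment measure concrete, here is a minimal Python sketch of how the lag between a gesture apex and the nearest vocalic target could be computed from motion-capture data. It is not the authors' analysis pipeline; the apex definition (maximum fingertip displacement from a resting position) and all variable names are assumptions.

        import numpy as np

        def apex_time(t, fingertip_xyz, rest_xyz):
            """Time of the gesture apex, taken here as the sample of maximum
            fingertip distance from a resting position (one possible definition;
            peak-velocity criteria are equally common)."""
            dist = np.linalg.norm(fingertip_xyz - rest_xyz, axis=1)
            return t[np.argmax(dist)]

        def apex_to_vowel_lag(t, fingertip_xyz, rest_xyz, vocalic_target_times):
            """Lag in seconds between the gesture apex and the nearest articulatory
            vocalic target; negative values mean the apex precedes the target."""
            t_apex = apex_time(t, fingertip_xyz, rest_xyz)
            targets = np.asarray(vocalic_target_times, dtype=float)
            nearest = targets[np.argmin(np.abs(targets - t_apex))]
            return t_apex - nearest

        # Toy example with simulated data (hypothetical values).
        t = np.linspace(0.0, 1.0, 200)                                  # 200 Hz capture
        traj = np.stack([np.sin(np.pi * t), np.zeros_like(t), np.zeros_like(t)], axis=1)
        print(apex_to_vowel_lag(t, traj, rest_xyz=np.zeros(3),
                                vocalic_target_times=[0.45, 0.80]))     # ~0.05 s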

    Alzheimer's hand gestures and speech disorders in spoken and sung modalities

    In Alzheimer's disease (AD), studies of bimodal language production such as Carlomagno et al. (2005) do not treat speech and hand gestures concomitantly. We propose an original protocol to evaluate the correlation between hand gestures and articulatory gestures in the speech and singing of 4 persons diagnosed with AD by our hospital partner, paired with 4 control participants. Participants were asked to repeat 8 nursery rhymes created for this protocol: 4 were spoken and the other 4 were sung. In each modality, 2 of the nursery rhymes were combined with four iconic and two deictic hand gestures. The protocol was completed with several clinical tests: speech apraxia was evaluated with the MT86 clinical protocol (Joanette et al., 1998), manual praxis with Mahieux's battery (Mahieux-Laurent et al., 2009), and bucco-linguo-facial motor skills with an adaptation of the MBLF software (Gatignol & Lannadère, 2011). Each participant was recorded at home using a camcorder and a lapel microphone. The speech productions were annotated and analyzed in Praat, and the hand gestures in ELAN. We found no evidence of manual or speech apraxia in our patient population. However, significant differences between patients and control participants were observed in the production of hand gestures and speech. In patients, the movement, configuration, and orientation of hand gestures were slightly altered; this alteration seemed to depend on the gesture value but not on the modality (speech or singing). The requirement to produce specific hand gestures affected speech production: patients produced more errors in the rhymes with associated hand gestures. Speech productions were influenced to different degrees by the spoken and sung modalities: patients made more errors when singing, and still more when hand gestures were added, showing a dual-task effect likely due to the attention deficits typical of AD (Siéroff & Piquard, 2004).
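    As a sketch of how the resulting annotations could be summarised, the following Python snippet computes mean error counts per condition. It assumes the Praat and ELAN tiers have already been exported to a flat CSV; the file name and column names (group, modality, gestures, errors) are hypothetical placeholders, not the study's actual coding scheme.

        import pandas as pd

        # Hypothetical export of the annotated productions, one row per nursery
        # rhyme per participant.
        df = pd.read_csv("annotations.csv")

        # Mean error count split by group (AD vs. control), modality (spoken vs.
        # sung) and presence of associated hand gestures, mirroring the
        # comparisons described above.
        summary = (df.groupby(["group", "modality", "gestures"])["errors"]
                     .agg(["mean", "std", "count"]))
        print(summary)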

    Sign Language Tutoring Tool

    In this project, we have developed a sign language tutor that lets users learn isolated signs by watching recorded videos and then attempting the same signs. The system records the user's video and analyses it. If the sign is recognized, both verbal and animated feedback is given to the user. The system is able to recognize complex signs that involve both hand gestures and head movements and expressions. Our performance tests yield a 99% recognition rate on signs involving only manual gestures and an 85% recognition rate on signs that involve both manual and non-manual components, such as head movements and facial expressions.
    Comment: eNTERFACE'06 Summer Workshop on Multimodal Interfaces, Dubrovnik, Croatia (2007).
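    The abstract does not specify the recognizer, so, as an illustration only, here is a small dynamic-time-warping nearest-template classifier that fuses manual and non-manual features into one per-frame vector; the feature layout and the template dictionary are assumptions, not the system described above.

        import numpy as np

        def dtw_distance(a, b):
            """Dynamic-time-warping distance between two feature sequences
            (frames x features), an alignment-aware measure of sequence similarity."""
            n, m = len(a), len(b)
            cost = np.full((n + 1, m + 1), np.inf)
            cost[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    d = np.linalg.norm(a[i - 1] - b[j - 1])
                    cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
            return cost[n, m]

        def classify_sign(query, templates):
            """Nearest-template classification of an isolated sign.  Each sequence
            is a list of per-frame vectors that concatenate manual features (hand
            position, handshape) with non-manual ones (head pose, expression)."""
            return min(templates, key=lambda label: dtw_distance(query, templates[label]))

        # Usage (hypothetical data): templates = {"hello": seq1, "thanks": seq2}
        # print(classify_sign(user_sequence, templates))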

    Gesture’s Neural Language

    When people talk to each other, they often make arm and hand movements that accompany what they say. These manual movements, called "co-speech gestures," can convey meaning by way of their interaction with the oral message. Another class of manual gestures, called "emblematic gestures" or "emblems," also conveys meaning, but in contrast to co-speech gestures, they can do so directly and independently of speech. There is currently significant interest in the behavioral and biological relationships between action and language. Since co-speech gestures are actions that rely on spoken language, and emblems convey meaning so effectively that they can sometimes substitute for speech, these actions may be important, and potentially informative, examples of language-motor interactions. Researchers have recently been examining how the brain processes these actions. The current results of this work do not yet give a clear understanding of gesture processing at the neural level. For the most part, however, it seems that two complementary sets of brain areas respond when people see gestures, reflecting their role in disambiguating meaning. These include areas thought to be important for understanding actions and areas ordinarily related to processing language. The shared and distinct responses across these two sets of areas during communication are just beginning to emerge. In this review, we discuss the ways the brain responds when people see gestures, how these responses relate to brain activity when people process language, and how these might relate in normal, everyday communication.

    A fast algorithm for vision-based hand gesture recognition for robot control

    We propose a fast algorithm for automatically recognizing a limited set of gestures from hand images for a robot control application. Hand gesture recognition is a challenging problem in its general form. We consider a fixed set of manual commands and a reasonably structured environment, and develop a simple, yet effective, procedure for gesture recognition. Our approach contains steps for segmenting the hand region, locating the fingers, and finally classifying the gesture. The algorithm is invariant to translation, rotation, and scale of the hand. We demonstrate the effectiveness of the technique on real imagery.
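    A rough Python/OpenCV sketch of such a pipeline is given below: skin-colour segmentation, largest-contour extraction, and finger counting from convexity defects, followed by a fixed mapping from finger count to robot command. The HSV thresholds and the command mapping are illustrative assumptions, not the authors' exact procedure.

        import cv2
        import numpy as np

        def count_extended_fingers(bgr_image):
            """Segment the hand by skin colour, take the largest contour, and count
            extended fingers from convexity defects.  The HSV thresholds are
            illustrative and would need tuning for a real setup."""
            hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
            mask = cv2.inRange(hsv, (0, 30, 60), (20, 150, 255))      # crude skin range
            mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
            if not contours:
                return 0
            hand = max(contours, key=cv2.contourArea)                  # largest blob = hand

            hull = cv2.convexHull(hand, returnPoints=False)
            defects = cv2.convexityDefects(hand, hull)
            if defects is None:
                return 0

            # A deep convexity defect is the valley between two fingers, so
            # (number of deep defects + 1) approximates the extended-finger count.
            deep = sum(1 for i in range(defects.shape[0])
                       if defects[i, 0, 3] / 256.0 > 20)               # depth in pixels
            return deep + 1 if deep > 0 else 0

        # A fixed mapping from finger count to command could then implement the
        # limited command set, e.g. {5: "stop", 2: "turn-left", 1: "forward"}.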

    Gesture Detection Towards Real-Time Ergonomic Analysis for Intelligent Automation Assistance

    Manual handling involves transporting a load by hand through lifting or lowering, and operators on the manufacturing shop floor face constant lifting and lowering operations every day, which can lead to work-related musculoskeletal disorders. Trends in shop-floor data collection for ergonomic evaluation during manual handling activities reveal a gap in gesture detection, as gesture-triggered data collection could enable more accurate ergonomic data capture and analysis. This paper presents an application that detects gestures with the Microsoft Kinect to trigger real-time human motion data capture on the shop floor for ergonomic evaluation and risk assessment. The gestures were trained using a discrete indicator, specifically the AdaBoost trigger. Our results show that the Kinect can be trained to detect gestures for real-time ergonomic analysis, potentially offering intelligent automation assistance during tasks detrimental to human posture.
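    Discrete (AdaBoost-trigger) indicators of this kind are trained from tagged skeleton clips. As a library-agnostic illustration only, the sketch below trains a scikit-learn AdaBoost classifier on flattened joint coordinates and uses it to decide when to start capturing ergonomic data; the feature layout and the random placeholder data are assumptions, not the authors' training setup.

        import numpy as np
        from sklearn.ensemble import AdaBoostClassifier

        # Each training example is one skeleton frame flattened to
        # (num_joints * 3) coordinates, labelled 1 if the trigger gesture is
        # being performed.  Random placeholders stand in for real recordings.
        rng = np.random.default_rng(0)
        X_train = rng.normal(size=(500, 25 * 3))     # 25 Kinect joints, x/y/z each
        y_train = rng.integers(0, 2, size=500)

        detector = AdaBoostClassifier(n_estimators=100)
        detector.fit(X_train, y_train)

        def gesture_triggered(skeleton_frame):
            """Return True when the detector fires, i.e. when ergonomic data
            capture should start for the current frame."""
            return bool(detector.predict(skeleton_frame.reshape(1, -1))[0])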

    Second Language Acquisition of American Sign Language Influences Co-speech Gesture Production

    Previous work indicates that 1) adults with native sign language experience produce more manual co-speech gestures than monolingual non-signers, and 2) one year of ASL instruction increases gesture production in adults, but not enough to differentiate them from non-signers. To elucidate these effects, we asked early ASL–English bilinguals, fluent late second language (L2) signers (≥ 10 years of signing experience), and monolingual non-signers to retell a story depicted in cartoon clips to a monolingual partner. Early and L2 signers produced manual gestures, particularly iconic gestures, at higher rates than non-signers, and used a greater variety of handshapes. These results indicate that the co-speech gesture system is susceptible to modification by extensive sign language experience, regardless of the age of acquisition. L2 signers produced more ASL signs and more handshape varieties than early signers, suggesting less separation between the ASL lexicon and the co-speech gesture system for L2 signers.
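    For illustration, a short Python sketch of the group comparison is given below. It assumes the coded retellings have been exported to a CSV; the file name and column names (group, words, iconic_gestures, handshapes) are hypothetical placeholders, not the study's actual coding scheme.

        import pandas as pd

        # One row per participant; "handshapes" is assumed to be a ";"-separated
        # list of handshape codes used during the retelling.
        df = pd.read_csv("retellings.csv")

        df["gesture_rate"] = df["iconic_gestures"] / df["words"] * 100   # per 100 words
        df["handshape_variety"] = df["handshapes"].str.split(";").apply(lambda s: len(set(s)))

        print(df.groupby("group")[["gesture_rate", "handshape_variety"]].mean())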